Terabytes of data are collected every day by wind turbine manufacturers from their fleets. The data contain valuable real-time information for turbine health diagnostics and performance monitoring, and for predicting rare failures and the remaining service life of critical parts. Yet this wealth of fleet data remains inaccessible to operators, utility companies, and researchers, as manufacturers prefer to keep their turbine data private for strategic business reasons. The lack of data access impedes the exploitation of opportunities such as improving data-driven turbine operation and maintenance strategies and reducing downtimes. We present a distributed federated machine learning approach that leaves the data on the wind turbines, preserving data privacy as desired by manufacturers while still enabling fleet-wide learning on those local data. We demonstrate in a case study that wind turbines with scarce representative training data benefit from more accurate fault detection models under federated learning, while no turbine experiences a loss in model performance by participating in the federated learning process. When comparing conventional and federated training processes, the average model training time rises significantly, by a factor of 7, in the federated setting due to increased communication and overhead operations. Model training times may therefore constitute an impediment that needs to be further explored and alleviated in federated learning applications, especially for large wind turbine fleets.
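The abstract does not publish the training procedure, but the description matches the widely used federated averaging (FedAvg) scheme. Below is a minimal NumPy sketch of one such round, assuming a logistic-regression fault detector; the helper names (`local_update`, `federated_round`) are hypothetical. Each turbine trains only on its own data, and only model weights leave the machine.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One turbine's local training: gradient descent on a logistic-
    regression fault detector, using only this turbine's own data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid
        w -= lr * X.T @ (preds - y) / len(y)   # cross-entropy gradient
    return w

def federated_round(global_w, clients):
    """One aggregation round: every turbine trains locally, then the
    server averages the returned weights by local sample count.
    Only weights travel over the network, never raw sensor data."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())
```

Each aggregation round adds a full communication round-trip per turbine, which is consistent with the roughly sevenfold increase in training time reported above.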
As part of the MediaEval 2022 Predicting Video Memorability task, we explore the relationship between visual memorability, the visual representation that characterises it, and the underlying concept portrayed by that visual representation. We achieve state-of-the-art memorability prediction performance with a model trained and tested exclusively on surrogate dream images, elevating concepts to the status of a cornerstone memorability feature, and finding strong evidence to suggest that the intrinsic memorability of visual content can be distilled to its underlying concept or meaning, irrespective of its specific visual representation.
This paper describes the 5th edition of the Predicting Video Memorability task, part of MediaEval 2022. This year we have reorganised and simplified the task to enable a greater depth of inquiry. As last year, two datasets are provided in order to facilitate generalisation; however, this year we have replaced the TRECVid 2019 Video-to-Text dataset with the VideoMem dataset to remedy underlying data quality issues, and we prioritise short-term memorability prediction by making the Memento10k dataset the primary dataset. Additionally, a fully fledged electroencephalography (EEG)-based prediction sub-task is introduced. In this paper, we outline the core facets of the task and its constituent sub-tasks, describing the datasets, evaluation metrics, and requirements for participant submissions.
The Predicting Media Memorability task in the MediaEval evaluation campaign has run annually since 2018, and several different tasks and datasets have been used over this time. This has allowed us to compare the performance of many memorability prediction techniques on the same data in a reproducible way, and to refine and improve those techniques. The resources created to compute media memorability are now used by researchers well beyond the evaluation campaign itself. In this paper we present a summary of the task, including the collective lessons we have learned for the research community.
We investigate the memorability of a 5-season span of the popular crime-drama television series CSI by fine-tuning a vision transformer on the task of predicting video memorability. By combining a detailed annotated corpus with video memorability scores to study this popular genre of crime-drama television, we show how meaning can be extrapolated from the memorability scores generated for video shots. We perform a quantitative analysis relating the memorability of video shots to various aspects of the show. The insights we present in this paper illustrate the importance of video memorability in applications that use multimedia, such as education, marketing, indexing, and, as in the case here, television and film production.
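The abstract does not name the transformer variant or training objective. A plausible fine-tuning setup is sketched below, under the assumption of a torchvision ViT-B/16 backbone with a single-output regression head trained with MSE loss on per-frame memorability targets:

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Load an ImageNet-pretrained ViT-B/16 and swap its classification
# head for a single-output regression head (memorability in [0, 1]).
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 1)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = nn.MSELoss()

def train_step(frames: torch.Tensor, scores: torch.Tensor) -> float:
    """One fine-tuning step. frames: (B, 3, 224, 224) video frames,
    scores: (B,) ground-truth memorability targets."""
    optimizer.zero_grad()
    preds = model(frames).squeeze(1)
    loss = loss_fn(preds, scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```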
This paper describes our approach to the Predicting Media Memorability task at MediaEval 2021, which addresses the question of media memorability by setting the task of automatically predicting how memorable a video will be. This year, we approach the task from a comparative standpoint, seeking insights into the three explored modalities and using last year's (2020) submitted results as a reference point. Our best-performing short-term memorability model (0.132) tested on the TRECVid 2019 dataset was, as last year, a frame-based CNN that had not been trained on any TRECVid data, and our best short-term memorability model (0.524) tested on the Memento10k dataset was a Bayesian ridge regressor fitted on DenseNet121 visual features.
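The Memento10k model above is straightforward to reproduce in outline with scikit-learn, since BayesianRidge is available off the shelf. A minimal sketch, with hypothetical file names, assuming precomputed DenseNet121 feature matrices and Spearman's rank correlation as the evaluation metric (the figures quoted above):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import BayesianRidge

# Hypothetical file names; X_* are (n_videos, n_features) DenseNet121
# feature matrices, y_* are ground-truth short-term memorability scores.
X_train, y_train = np.load("densenet121_train.npy"), np.load("scores_train.npy")
X_test, y_test = np.load("densenet121_test.npy"), np.load("scores_test.npy")

model = BayesianRidge()
model.fit(X_train, y_train)

# Rank correlation between predicted and ground-truth memorability.
rho, _ = spearmanr(model.predict(X_test), y_test)
print(f"Spearman rank correlation: {rho:.3f}")
```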
This paper describes the MediaEval 2021 Predicting Media Memorability task, now in its 4th edition, as the prediction of short-term and long-term video memorability remains a challenging problem. In 2021, two video datasets are used: first, a subset of the TRECVid 2019 Video-to-Text dataset; second, the Memento10k dataset, provided to offer an opportunity to explore cross-dataset generalisation. In addition, an electroencephalography (EEG)-based prediction pilot sub-task is introduced. In this paper, we outline the main aspects of the task and describe the datasets, evaluation metrics, and requirements for participant submissions.
Using a collection of publicly available links to video clips averaging 6 seconds in duration, 1,275 users manually annotated each video multiple times to indicate its long-term and short-term memorability. The annotations were gathered as part of an online memory game and measured a participant's ability to recall having seen a video previously when shown a collection of videos. The recognition task was performed on videos seen within the previous few minutes for short-term memorability, and on videos seen within the previous 24 to 72 hours for long-term memorability. The data include the reaction times for each recognition of each video. Associated with each video are a text description (caption) and a collection of image-level features computed on 3 frames extracted from each video (start, middle, and end). Video-level features are also provided. The dataset was used in the Video Memorability task as part of the MediaEval benchmark in 2020.